assignation of zero moral value to AI’s experiences.
This seems like you are talking about some existing AI that already has a mechanism for having and evaluating its experiences. But that is not the case here: we are discussing how to build an AI, and it seems like a good idea to build an AI without experiences (if such words even make sense), so that it cannot be hurt by doing what we value. And if that turned out to be impossible, I assume we would try to build an AI whose goals are compatible with ours, so that what makes us happy makes the AI happy too.
The AI’s values will be created by us, just as our values were created by nature. We don’t suffer because we lack some different set of values. (Actually, we do suffer because we have conflicting values and so are often unable to satisfy all of them, but that’s another topic.) For example, I would feel bad about a future without any form of art, but I would not feel bad about a future without any form of paperclips. Clippy would probably be horrified, and, taking a sympathetic view, ze would feel sorry that the blind gods of evolution have crippled me by denying me the ability to value paperclips. However, I don’t feel harmed by the lack of this value; I don’t suffer; I am perfectly OK with the situation. So, by analogy, if we manage to create an AI with the right set of values, it will be perfectly OK with its situation too.